Latest news with #artificial general intelligence


The Guardian
17-07-2025
- Business
- The Guardian
AI firms 'unprepared' for dangers of building human-level systems, report warns
Artificial intelligence companies are 'fundamentally unprepared' for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group. The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for 'existential safety planning'. One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had 'anything like a coherent, actionable plan' to ensure the systems remained safe and controllable.

AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI 'benefits all of humanity'. Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.

The FLI's report said: 'The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning.'

The index evaluates seven AI developers – Google DeepMind, OpenAI, Anthropic, Meta, xAI and China's Zhipu AI and DeepSeek – across six areas including 'current harms' and 'existential safety'. Anthropic received the highest overall safety score with a C+, followed by OpenAI with a C and Google DeepMind with a C-.

The FLI is a US-based non-profit that campaigns for safer use of cutting-edge technology and is able to operate independently due to an 'unconditional' donation from crypto entrepreneur Vitalik Buterin. SaferAI, another safety-focused non-profit, also released a report on Thursday warning that advanced AI companies have 'weak to very weak risk management practices' and labelled their current approach 'unacceptable'.

The FLI safety grades were assigned and reviewed by a panel of AI experts, including British computer scientist Stuart Russell and Sneha Revanur, founder of AI regulation campaign group Encode Justice.

Max Tegmark, a co-founder of FLI and a professor at Massachusetts Institute of Technology, said it was 'pretty jarring' that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences. He said: 'It's as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.'

Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. 'Now the companies themselves are saying it's a few years away,' he said.

He added that progress in AI capabilities had been 'remarkable' since the global AI summit in Paris in February, with new models such as xAI's Grok 4, Google's Gemini 2.5 and its video generator Veo3 all showing improvements on their forebears.

A Google DeepMind spokesperson said the reports did not take into account 'all of Google DeepMind's AI safety efforts'. They added: 'Our comprehensive approach to AI safety and security extends well beyond what's captured.' OpenAI, Anthropic, Meta, xAI, Zhipu AI and DeepSeek have also been approached for comment.




Telegraph
17-07-2025
- Science
- Telegraph
China is preparing to steal the jobs of the future
They call it 'artificial general intelligence' (AGI) and it may be only a few years away. AGI is the imagined point at which computers finally achieve consciousness, surpass human intelligence and begin to self-improve across virtually all cognitive tasks at an accelerating rate. Sometimes referred to as the 'singularity', it has long been the stuff of science fiction, but such is the pace of current development that many in the technology industry believe it is on the verge of becoming a reality.

Whoever gets there first, it is widely believed, will inherit the Earth, embedding their influence, ideology and systems of governance into world affairs for generations to come. It is a frightening as well as awe-inspiring prospect, and one whose potentially transformational geopolitical consequences are only now starting to be more widely appreciated.

That is why the US and China are increasingly engaged in what can only be described as a new arms race – or space race – to develop and harness artificial superintelligence for economic and geopolitical superiority. In both jurisdictions, hundreds of billions of dollars a year are being poured into getting there first. Yet though the Trump administration is only too aware of the threat, its response is oddly backward-looking and counterproductive. Despite the US's apparent world lead in supercomputing, there is a high chance it will end up losing the war.